Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

ID Vault password authentication to a Domino 14.5.0 ID Vault server fails with error: Illegal Security function code

Daniel Nashed – 27 August 2025 19:59:53

Domino 14.5 introduced an ID Vault incompatibility with older client versions.

The issue is described in the following technote.


https://support.hcl-software.com/csm?id=kb_article&sysparm_article=KB0122915

If you are looking into upgrading and have older clients, you should leave your ID Vault server on 14.0 for now.
Domino 14.5 FP1 is planned to ship soon and will provide an option to keep the older algorithm.


Below is the current info from the technote. I will write up more information as soon as we have 14.5 FP1 available.

But it is expected to provide a way to control whether the new algorithm is used.


What is important: Once the ID is stored in the new format, an older client can no longer access it.

The new format is written when the ID is updated.

If you have a current issue, you should open a support ticket.



Notes/Domino 14.5.0 released with a new, stronger, default algorithm setting used during the password authentication protocol to the ID Vault. This new default algorithm only has latent support in 14.0.x clients and servers.

Notes clients and Domino servers that are pre-14.0 do not have the latent support for the new 14.5.0 default algorithm setting used in the ID Vault password authentication transaction.

Therefore any ID Vault password authentication transaction consisting of one endpoint running pre-14.0 code and the other end 14.5.0 code could encounter the ID Vault password authentication failure "Illegal Security function code".


A LotusScript NotesHTTPRequest change in Domino 14.5 you should know

Daniel Nashed – 26 August 2025 21:08:49
There is a change in Domino 14.5 which is on by default and which you should know about.
In general this is a great move in the right direction. I wish we would get this for clients next, too.

But this would mean we need to push root certs to clients, and they might need to have cross certs, because users can update their local names.nsf documents.
The CA certs in the Domino directory are the same by default. But if you have your own root certificates, you need to import them into the server's Domino directory.

https://help.hcl-software.com/domino/14.5.0/admin/wn_145_security_features.html#wn_145_security_features__section_dbj_sn4_cfc

Starting with Domino 14.5 server, the NotesHTTPRequest LotusScript class methods will, by default, load Trusted Roots from the Domino Directory when running on a Domino server. Previously, trusted root certificates were loaded from the file "cacerts.pem" in the Domino data directory. This new change doesn't impact NotesHTTPRequest LotusScript class behavior on the Notes client.


If you prefer to maintain the previous behavior and continue using "cacerts.pem", you can do so by setting the server-side Notes.ini parameter: NotesHTTPRequest_Use_CACerts=1


 Tools 

Very helpful Color Picker in Chrome

Daniel Nashed – 25 August 2025 22:05:44

Notes has a hidden color picker which, with a trick, works on the whole screen -- when you are in a color field.
But it is hard to pick colors, for example for small shaded text.

There is a great tool which works in Chrome without any add-ons.
It uses a magnifying function for picking colors and shows the result in various formats.

https://pickcoloronline.com/


Image:Very helpful Color Picker in Chrome

A First Look at SUSE Leap/Enterprise 16 Shipping in Fall 2025

Daniel Nashed – 24 August 2025 09:07:19

SUSE is preparing to release Leap/Enterprise 16 in November 2025.

A release candidate is already available, although I haven’t seen a container image yet. Other vendors often release container images first to make testing easier.

SUSE skipped providing 15.7 for openSUSE Leap because version 16.0 is around the corner, although the enterprise container image of 15.7 is still available.


I gave Leap 16.0 a quick test in a Proxmox VM, and there were no major surprises in terms of compatibility—especially compared to Debian 13, which I tested recently.


SUSE Enterprise remains one of the two fully validated Linux distributions, along with Red Hat Enterprise, which makes this release candidate worth an early look.


Since SUSE Enterprise and openSUSE Leap share the same code base, I focused on Leap—it’s much easier to get hands-on access (Download:
https://get.opensuse.org/leap/16.0/)

Installation


The installation menu has been completely reworked.
It’s now cleaner, HTML-based, and even easier to navigate than in previous versions.


Package Versions


The included package versions look solid and align with what you’d expect from a modern distribution:

 
  • Kernel 6.12.0
  • glibc 2.40
  • OpenSSL 3.5.0 (8 Apr 2025)
  • curl 8.14.1

This should ensure compatibility with current workloads.


Docker Build/Run-Time Support


The official Docker convenience installer didn’t work.
However, SUSE 16.0 ships with an up-to-date Docker version: 28.3.2-ce.


My container setup script reported the failure of the convenience installer. Hopefully, Docker will update their scripts before SUSE 16.0 ships.
If not, I’ll add a work-around into my setup scripts.


Conclusion


From a Domino perspective, everything looks good so far.
The kernel and glibc versions are comparable to RHEL 10, so no surprises should be expected once SUSE 16 officially ships.


Personally, I’m moving toward Ubuntu for some use cases—mainly because of built-in ZFS support.
Still, openSUSE Leap/Enterprise 16 looks like it will be an excellent platform choice for a European-based enterprise Linux.


Domino 14.5 is not supported on brand new Debian 13 (Trixie)

Daniel Nashed – 23 August 2025 09:38:07

Debian 13 (Trixie) shipped
https://www.debian.org/News/2025/20250809

It comes with many packages updated to very recent versions.

One of the bigger changes is a very recent version of glibc 2.41.


There is a way to patch the affected binary. But it is not a good idea.

HCL will need to look into Debian 13 or glibc 2.41 support and you should not try to get this working on your own.


But there are people out there posting solutions and background information anyhow.

Here are the details of what is happening and why, so that nobody needs to start speculating.


- Domino 14.5 ships JDK 21.0.6.

- Debian 13 ships JDK 21.0.8 which does not show the same error


But glibc 2.41 might have other changes, and this would need a completely new test of the release.


Glibc is part of container images. This means running on a Debian 13 container host should still just work fine.


-- Daniel




Error message on Domino


Here is the error message you already see when invoking the java -version command; the same error makes the HTTP task fail to load.


/opt/hcl/domino/bin/java -version

JVMJ9VM011W Unable to load j9jit29: /opt/hcl/domino/notes/14050000/linux/jvm/lib/default/libj9jit29.so: cannot enable executable stack as shared object requires: Invalid argument

openjdk version "21.0.6" 2025-01-21 LTS

IBM Semeru Runtime Open Edition 21.0.6.0 (build 21.0.6+7-LTS)

Eclipse OpenJ9 VM 21.0.6.0 (build openj9-0.49.0, JRE 21 Linux amd64-64-Bit Compressed References 20250121_380 (JIT disabled, AOT disabled)

OpenJ9   - 3c3d179854

OMR      - e49875871

JCL      - e01368f00df based on jdk-21.0.6+7)



Debian 13 shipped Java 21 version


java -version

openjdk version "21.0.8" 2025-07-15

OpenJDK Runtime Environment (build 21.0.8+9-Debian-1)

OpenJDK 64-Bit Server VM (build 21.0.8+9-Debian-1, mixed mode, sharing)



Problem description and background


The problem turns out to be caused by a change in glibc 2.41.


https://unix.stackexchange.com/questions/792460/dlopen-fails-after-debian-trixie-libc-transition-cannot-enable-executable-st

-- snip --

This is indeed a glibc change: in version 2.41, which migrated to Debian 13 on March 13 (replacing 2.40), dlopen no longer allows libraries requiring executable stacks to be loaded if the stack is not already executable:


dlopen and dlmopen no longer make the stack executable if a shared library requires it, either implicitly because of a missing GNU_STACK ELF header (and default ABI permission having the executable bit set) or explicitly because of the executable bit in GNU_STACK, and the stack is not already executable. Instead, loading such objects will fail.


-- snip --
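You can reproduce the check glibc performs by reading a shared object's PT_GNU_STACK program header. Here is a minimal Python sketch (my own illustration, 64-bit little-endian ELF only; the Domino library path in the comment is the one from the error message above):

```python
import struct

PT_GNU_STACK = 0x6474E551  # program header type for stack permissions
PF_X = 0x1                 # executable flag

def wants_executable_stack(path):
    """Return True if the ELF64 file at `path` requests an executable stack
    (or lacks a PT_GNU_STACK header, which defaults to executable) -- the
    condition glibc 2.41's dlopen now rejects."""
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF" or ident[4] != 2:  # magic, 64-bit class
            raise ValueError("not a 64-bit ELF file")
        f.seek(0x20)
        e_phoff = struct.unpack("<Q", f.read(8))[0]
        f.seek(0x36)
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            p_type, p_flags = struct.unpack("<II", f.read(8))
            if p_type == PT_GNU_STACK:
                return bool(p_flags & PF_X)
    # No PT_GNU_STACK header at all: treated as requiring an executable stack
    return True

# Example (path from the error message above):
# wants_executable_stack("/opt/hcl/domino/notes/14050000/linux/jvm/lib/default/libj9jit29.so")
```

This is roughly what execstack -q reports as well, without installing any extra package.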


There is a way to patch the affected binary. But this is not something I would advise you to use.



How to patch the affected binary


From what it looks like, the shared library is the only binary that needs to be patched.


I have just done a very basic test. execstack -q queries the executable-stack flag; execstack -c would clear it:


execstack -q /opt/hcl/domino/notes/14050000/linux/jvm/lib/default/libj9jit29.so

execstack -c /opt/hcl/domino/notes/14050000/linux/jvm/lib/default/libj9jit29.so



How to get the execstack on Debian 13


The execstack package has been removed and is not available on Debian 13.

The following way works. But this isn't anything I would like to add to a Docker build pipeline, nor would I do it in production without support.


apt-get update

apt-get install -y --no-install-recommends wget ca-certificates libelf1 libselinux1


wget -O /tmp/execstack.deb https://archive.ubuntu.com/ubuntu/pool/universe/p/prelink/execstack_0.0.20130503-1.1_amd64.deb

dpkg -i /tmp/execstack.deb

execstack -V || true


 ACME  Buypass 

Buypass stops their ACME based services end of October 2025

Daniel Nashed – 18 August 2025 11:15:49
I just received the following mail -- which you should also get if you are a subscriber.

Buypass has decided to terminate the service for issuing TLS/SSL Certificates, including Go SSL (ACME). Certificates may be applied for until 15 October 2025.

The last issuance date will be 31 October 2025.

All certificates issued by 31 October 2025 will remain valid until they reach their expiry date or are revoked.



Here are the details about this termination of service:
https://community.buypass.com/t/y4y130p
I have been using Buypass certificates for two reasons.


1. I am using all ACME-based CAs for testing in general, because I am involved in an ACME-based solution
2. Buypass certificates are valid for 180 days instead of the 90 days for Let's Encrypt


Since most admins are using Let's Encrypt, this will probably not impact many of you.


I will miss Buypass as one of my favorite ACME CAs. But I can understand the move.
It's a lot of effort to have a public and free ACME based service.



Restarting your Windows explorer with one line

Daniel Nashed – 17 August 2025 09:33:15

Desktop icons gone? Quick Windows fix (no reboot)
Sometimes Windows Explorer hangs and your desktop icons vanish.
In my case, restarting Explorer from Task Manager didn’t help.

ChatGPT came up with this one-liner, and it instantly brought my icons back without a reboot:


taskkill /f /im explorer.exe & start explorer.exe


My old friend Nagle hit me again when writing a Milter client

Daniel Nashed – 10 August 2025 09:56:30
I have not looked into Nagle for a very long time. In many environments, the default is off.
So I was really surprised when I did a simple loopback TCP connection that my communication was much slower than a UNIX socket.
I asked ChatGPT, and it came up with Nagle as a possible reason. I would not have thought about Nagle immediately, because I haven’t hit Nagle for a long time.

In my use case it really makes a difference. I am writing a Milter client to communicate with ClamAV and other Sendmail-compatible milters.
A milter sends commands step by step (like the sender, recipients, headers, and finally the body).

This conversation is much slower with Nagle enabled, and when sending the sender, recipients, some headers, and the EICAR test virus, the difference was 30 ms vs. 300 ms between using a UNIX socket and a TCP connection on the loopback interface.
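As I understand the milter framing, each of those steps is one small packet: a 4-byte big-endian length covering the command byte plus payload, then the command byte and the payload. That many-small-writes pattern is exactly the traffic Nagle delays. A rough Python sketch (the command code 'M' follows the Sendmail milter protocol; the address is a made-up example):

```python
import struct

def milter_packet(cmd: bytes, data: bytes = b"") -> bytes:
    # One milter protocol packet: 4-byte big-endian length that counts
    # the command byte plus the payload, then command byte, then payload.
    return struct.pack(">I", 1 + len(data)) + cmd + data

# An envelope-sender step as a single small packet:
pkt = milter_packet(b"M", b"<sender@example.com>\x00")
```

With each packet written separately, Nagle holds back the next small segment until the previous one is acknowledged, which is where the latency comes from.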
The documentation says that UNIX sockets are much cheaper from a resource point of view. They don’t need a handshake—just opening a file descriptor.

But with Nagle disabled, the performance difference wasn’t noticeable anymore.
After setting TCP_NODELAY, the performance was almost the same.

It’s still a good idea to use a UNIX socket where possible, because it does reduce the load on the local TCP/IP stack—especially when opening and closing connections often.
I added the same syntax milters use to distinguish between TCP and UNIX sockets.

The calls are almost the same, besides opening the connection. All communication uses the same system calls, like read() and write().

Here is the call I added to the C code:

setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
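For quick experiments, the same option can be set from Python; a minimal sketch (the assert just verifies the option took effect):

```python
import socket

# Disable Nagle on a client socket before starting the milter
# conversation -- the Python equivalent of the setsockopt() call above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option is set on the socket
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```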

Domino Start Script Diagnostic Improvements

Daniel Nashed – 1 August 2025 22:42:41

Some of the following additions made it already into an earlier version.
I actually started after Engage. During Engage I had a server hang and had to run, download and analyze NSDs from the hotel room.
This led to the first diagnostic additions.

  • NSDs are now at your fingertips. You can open the latest NSD and see when it was created.
  • If you have nshmailx installed, you can send NSDs directly by mail (independent from Domino with the small helper tool via SMTPS).
  • Today I added disk space information, reading the translog, DAOS, FT and NIF settings from notes.ini. This was a missing part, which needed some thought about which disks to show.

Domino process explorer


But on top of this, I added a Domino process explorer to look into Domino processes more interactively, without creating an NSD.

I added this mainly for my own diagnostic and performance troubleshooting.
You can interactively pick processes, dump processes or just single threads in fast sequence -- including auto refresh from 1-9 seconds.
In addition, you can interactively toggle logging those stacks to a file on and off.
It's using the GNU debugger (gdb) in a similar way NSD uses it.
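A minimal sketch of that approach in Python (the helper names are mine, not from the start script; it assumes gdb is installed and you have ptrace permission for the target process):

```python
import subprocess

def gdb_stack_cmd(pid: int) -> list:
    # gdb batch invocation that prints all thread stacks of a process,
    # roughly the way NSD collects call stacks.
    return ["gdb", "-p", str(pid), "-batch",
            "-ex", "set pagination off",
            "-ex", "thread apply all bt"]

def dump_thread_stacks(pid: int) -> str:
    # Attach, dump all thread backtraces, detach; returns the raw text.
    result = subprocess.run(gdb_stack_cmd(pid), capture_output=True, text=True)
    return result.stdout
```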

New interactive navigation model


The interactive selection of processes is a new menu navigation model I am trying out.

This includes cursor up/down, selecting via Enter, and ESC to go back.
It needs some tricks to put the session into raw mode for the duration of the menu operations, to read the extended keys (up/down cursor and ESC).
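A minimal Python sketch of the same idea (the function names are mine; real cursor-key handling needs to cover more escape sequences):

```python
import os
import sys
import termios
import tty

def decode_key(seq: bytes) -> str:
    # Map raw terminal input to menu actions: cursor keys arrive as
    # multi-byte escape sequences, a lone ESC means "go back".
    if seq == b"\x1b[A":
        return "UP"
    if seq == b"\x1b[B":
        return "DOWN"
    if seq in (b"\r", b"\n"):
        return "ENTER"
    if seq == b"\x1b":
        return "ESC"
    return "OTHER"

def read_key() -> str:
    # Switch the terminal to raw mode only for the duration of one read,
    # then restore the saved settings.
    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        seq = os.read(fd, 3)  # up to 3 bytes covers "ESC [ A" style sequences
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)
    return decode_key(seq)
```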


There are more details to discover. I am still working on details.

I have submitted the changes to the Domino Start Script and the Domino Container project in parallel.
This means the new script and the enhancements are already available in the container environment as well if you switch to the development branch.

---

Some of the changes are really targeted at very senior admins analyzing problems.
But overall, the diagnostic enhancements benefit everyone.

I added some of this for my own needs (helping customers to troubleshoot) and there is more coming.
What I would really like to do is semaphore annotation.
But this needs changes either in Domino, or I would need to provide an add-on annotation tool to annotate the semaphore time.


Let me know if this is helpful and what might be missing.


Image:Domino Start Script Diagnostic Improvements


Image:Domino Start Script Diagnostic Improvements


Image:Domino Start Script Diagnostic Improvements

Image:Domino Start Script Diagnostic Improvements


 Domino  ZFS  Proxmox 

Leveraging ZFS for Domino native or via Proxmox LXC containers

Daniel Nashed – 7 July 2025 07:06:18

ZFS is a very interesting file system for many reasons.
It offers compression, deduplication, snapshots, encryption and a very flexible volume manager.
ZFS is also the file system leveraged by Proxmox as the strategic file system for local disks.

On Proxmox you can use ZFS in three different ways:


1. Proxmox host level

2. LXC container as a direct mount without another file system in the LXC container

3. VM with a zvol, which is kind of a raw device provided to the VM to add its own file system on top


The direct mount into the LXC container is a very interesting option which I tested before.

But now, bringing ZFS into the picture, this might make even more sense.


Of course this option only makes sense if you use ZFS native on Proxmox.

If you are running a larger Proxmox cluster, your storage is likely to use other options like Ceph.

But the following is also intended as food for thought to look into your own optimized storage.


One way that always works is to provide ZFS storage to a server over NFS.

NFS support is part of ZFS and allows to access ZFS over a network.


A simple configuration could look like the following.

This scenario would work with any machine which supports native ZFS.

It could be a Linux machine or a Proxmox host. Or an appliance like TrueNAS.



-- NFS Server --


Server Side on Ubuntu


Install packages for ZFS and NFS


apt install zfsutils-linux nfs-kernel-server


Create a pool and a volume with the right attributes for backup


zpool create tank /dev/sdb

zfs create -o mountpoint=/local/backup tank/backup

zfs set atime=off tank/backup

zfs set dedup=on tank/backup

zfs set recordsize=16K tank/backup



Enable NFS read/write sharing for the volume


zfs set sharenfs="rw=@192.168.96.42/32" tank/backup



Client Side on Ubuntu


Install package for NFS client


apt install nfs-common


Create a directory and mount the NFS volume (leaving out special attributes like noatime etc.)


mkdir -p /local/backup

chown notes:notes /local/backup

mount -t nfs 192.168.96.42:/local/backup /local/backup



The resulting performance for a Domino backup of larger NSF files:


Data Rate: 521.2 MB/sec



-- Proxmox LXC container mount --


If you are running a LXC container on Proxmox, you can create a ZFS volume and directly mount it into the LXC container without any additional overhead.


Create a mount with the right options


zfs create rpool/backup

zfs set atime=off rpool/backup

zfs set compression=lz4 rpool/backup

zfs set dedup=on rpool/backup

zfs set recordsize=16K rpool/backup

chown 101000:101000 /rpool/backup


Modify the settings of your LXC container, for example /etc/pve/lxc/101.conf.

Append the following type of line and restart your LXC container:


lxc.mount.entry = /rpool/backup local/backup none bind,create=dir 0 0


Aligning the recordsize to 16K improves deduplication but reduces the performance a bit.

The performance is still almost double that of native NFS, and the 16K block size is probably the better match.


Data Rate:   955.1 MB/sec  (with  16K recordsize)

Data Rate: 1,430.3 MB/sec  (with 128K recordsize)



Why is the performance better mounting a volume into the LXC container?


With NFS, the network connection is used. Even though this is the local network on the same Proxmox host, it causes overhead and limits the performance to the speed of the network.

Leveraging the underlying ZFS directly does not have this overhead and provides the performance of the underlying storage.


The fast SSDs used could provide much higher performance without deduplication.

But this is deduplicating ZFS write performance, which is quite impressive.


I have been using my mail file as test data. In real life, with more data, the performance might drop. But this shows the potential of the setup.



Other benefits


Another big advantage of using native ZFS volumes is the very flexible storage allocation.

Mounting a volume into the container gives you the full flexibility, as you can see in the example below.


The more I look into Proxmox and LXC containers, the more I want to consider LXC containers on Proxmox for hosting Domino servers.


---


Example list of volumes:


root@pve:/rpool# zfs list

NAME                           USED  AVAIL  REFER  MOUNTPOINT

rpool                          482G  1.33T   104K  /rpool

rpool/ROOT                    4.46G  1.33T    96K  /rpool/ROOT

rpool/backup                  58.6G  1.33T  58.6G  /rpool/backup

rpool/data/subvol-100-disk-0  13.1G  86.9G  13.1G  /rpool/data/subvol-100-disk-0



Deduplication status of the pool after a couple of backups:


zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT

rpool  1.81T   441G  1.38T        -         -     3%    23%  3.29x    ONLINE  -

